

Search for: All records

Creators/Authors contains: "Riedl, Mark"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Purpose: Challenges in teaching the engineering design process (EDP) at the high-school level, such as promoting good documentation practices, are well-documented. While developments in educational artificial intelligence (AI) systems have the potential to help address these challenges, the open-ended nature of the EDP leads to challenges that often lack the specificity required for actionable AI development. In addition, conventional educational AI systems (e.g. intelligent tutoring systems) primarily target procedural domain tasks with well-defined outcomes and problem-solving strategies, whereas the EDP involves open-ended problems and multiple correct solutions, making the timing and appropriateness of AI intervention complex.
     Design/methodology/approach: The authors conducted a six-week Research through Co-Design (RtCD) process (i.e. a co-design process rooted in Research through Design) with two experienced high-school engineering teachers to co-construct actionable insight in the form of AI intervention points (AI-IPs) in engineering education where an AI system can effectively intervene to support teachers while highlighting their pedagogical practices.
     Findings: This paper leveraged the design of task models to iteratively refine the authors' prior understanding of teachers' experiences with teaching the EDP into three AI-IPs, related to documentation, ephemeral interactions between teachers and students, and disruptive failures, that can serve as a focus for intelligent educational system designs.
     Originality/value: This paper discusses the implications of these AI-IPs for designing educational AI systems that support engineering education, as well as the importance of leveraging RtCD methodologies to engage teachers in developing intelligent educational systems that align with their needs and afford them control over computational interventions in their classrooms.
    Free, publicly-accessible full text available September 19, 2026
  2. Abstract: Reinforcement learning (RL) systems can be complex and non-interpretable, making it challenging for non-AI experts to understand or intervene in their decisions. This is due in part to the sequential nature of RL, in which actions are chosen because of their likelihood of obtaining future rewards. However, RL agents discard the qualitative features of their training, making it difficult to recover user-understandable information for "why" an action is chosen. We propose a technique, Experiential Explanations, to generate counterfactual explanations by training influence predictors along with the RL policy. Influence predictors are models that learn how different sources of reward affect the agent in different states, thus restoring information about how the policy reflects the environment. Two human evaluation studies revealed that participants presented with Experiential Explanations were better able to correctly guess what an agent would do than those presented with other standard types of explanation. Participants also found Experiential Explanations more understandable, satisfying, complete, useful, and accurate. Qualitative analysis provides information on the factors of Experiential Explanations that are most useful and the desired characteristics that participants seek from explanations.
    Free, publicly-accessible full text available April 12, 2026
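The abstract above describes influence predictors only at a high level. As a rough illustration of the idea (our own toy sketch under assumed details, not the paper's implementation): alongside a tabular Q-learning policy on a small chain environment, one value-style table per reward source is trained with the same TD updates, so the agent retains per-source information that an explanation could later contrast. The environment, reward sources, and all names here are illustrative assumptions.

```python
# Toy sketch of "influence predictors" trained alongside an RL policy.
# Illustrative only: a 1-D chain with two hypothetical reward sources,
# a goal (+1.0 at state 5) and a penalty (-0.5 at state 2).
import random

random.seed(0)

N_STATES = 6            # states 0..5; episodes start at state 0
ACTIONS = [-1, +1]      # move left / move right
SOURCES = {"goal": {5: 1.0}, "penalty": {2: -0.5}}
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
# One influence predictor per reward source: a table of discounted-return
# estimates for that source alone, updated with the same TD rule as Q.
influence = {src: [0.0] * N_STATES for src in SOURCES}

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    rewards = {src: table.get(s2, 0.0) for src, table in SOURCES.items()}
    return s2, rewards

for _ in range(2000):
    s = 0
    for _ in range(20):
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a_: Q[(s, a_)])
        s2, rewards = step(s, a)
        r = sum(rewards.values())  # the policy sees only the summed reward
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
                              - Q[(s, a)])
        for src in SOURCES:  # per-source TD update keeps qualitative info
            influence[src][s] += ALPHA * (rewards[src]
                                          + GAMMA * influence[src][s2]
                                          - influence[src][s])
        s = s2
        if s == N_STATES - 1:  # reaching the goal ends the episode
            break

# An explanation could contrast per-source influence at a given state.
print({src: round(influence[src][3], 2) for src in SOURCES})
```

After training, the per-source tables let one say, for example, that from a state near the goal the agent's choice is dominated by expected goal reward rather than the penalty, which is the kind of qualitative, source-level information the summed scalar reward discards.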
  3. Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap: the divide between technical affordances and social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability. Utilizing two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines in the context of XAI and elucidating how to use them to address the gap. We apply the framework to a third case in a new domain, showcasing its affordances. Finally, we discuss conceptual implications of the framework, share practical considerations in its operationalization, and offer guidance on transferring it to new contexts. By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.